
R2D2: Repeatable and Reliable Detector and Descriptor

Neural Information Processing Systems

Classical approaches are based on a detect-then-describe paradigm where separate handcrafted methods are used to first identify repeatable keypoints and then represent them with a local descriptor.
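The detect-then-describe paradigm can be illustrated with a minimal numpy sketch: a crude gradient-magnitude detector stands in for a handcrafted keypoint detector (e.g. Harris or DoG), and a flattened, L2-normalized patch stands in for a local descriptor (e.g. SIFT). The function names and parameters here are illustrative, not from any particular implementation.

```python
import numpy as np

def detect_keypoints(img, num=50, border=8):
    # Detect stage: score each pixel by gradient magnitude, a crude
    # stand-in for a handcrafted detector such as Harris or DoG.
    gy, gx = np.gradient(img.astype(float))
    score = gx ** 2 + gy ** 2
    # Zero out the border so descriptor patches stay inside the image.
    score[:border], score[-border:] = 0, 0
    score[:, :border], score[:, -border:] = 0, 0
    idx = np.argsort(score.ravel())[::-1][:num]
    return np.stack(np.unravel_index(idx, img.shape), axis=1)  # (y, x) rows

def describe_keypoints(img, kps, patch=8):
    # Describe stage: flatten and L2-normalize a patch around each
    # keypoint, a crude stand-in for a SIFT-style local descriptor.
    descs = []
    for y, x in kps:
        p = img[y - patch // 2:y + patch // 2,
                x - patch // 2:x + patch // 2].astype(float).ravel()
        descs.append(p / (np.linalg.norm(p) + 1e-8))
    return np.array(descs)

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
kps = detect_keypoints(img)        # 50 keypoints as (y, x) coordinates
descs = describe_keypoints(img, kps)  # one 64-dim descriptor per keypoint
```

The two stages are fully decoupled: the detector never sees the descriptor and vice versa, which is exactly the separation that joint detector-descriptor networks such as R2D2 remove.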



DINO-VO: A Feature-based Visual Odometry Leveraging a Visual Foundation Model

Azhari, Maulana Bisyir, Shim, David Hyunchul

arXiv.org Artificial Intelligence

Learning-based monocular visual odometry (VO) faces robustness, generalization, and efficiency challenges in robotics. Recent advances in visual foundation models, such as DINOv2, have improved robustness and generalization in various vision tasks, yet their integration into VO remains limited due to coarse feature granularity. In this paper, we present DINO-VO, a feature-based VO system leveraging the DINOv2 visual foundation model for sparse feature matching. To address the integration challenge, we propose a salient keypoint detector tailored to DINOv2's coarse features. Furthermore, we complement DINOv2's robust semantic features with fine-grained geometric features, resulting in more localizable representations. Finally, a transformer-based matcher and a differentiable pose estimation layer enable precise camera motion estimation by learning good matches. Against prior detector-descriptor networks such as SuperPoint, DINO-VO demonstrates greater robustness in challenging environments. Furthermore, we show superior accuracy and generalization of the proposed feature descriptors against standalone DINOv2 coarse features. DINO-VO outperforms prior frame-to-frame VO methods on the TartanAir and KITTI datasets and is competitive on the EuRoC dataset, while running efficiently at 72 FPS with less than 1 GB of memory usage on a single GPU. Moreover, it performs competitively against visual SLAM systems in outdoor driving scenarios, showcasing its generalization capabilities.
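The coarse-feature granularity issue the abstract mentions comes from DINOv2 producing one feature vector per image patch (e.g. one per 14x14 pixel block) rather than per pixel. A minimal sketch of selecting salient keypoints from such a coarse feature grid, using feature L2 norm as a hypothetical saliency proxy (the paper's detector is learned, not this heuristic):

```python
import numpy as np

def select_salient_patches(feat, k=16, patch_size=14):
    # feat: (H, W, C) grid of coarse patch features, one per
    # patch_size x patch_size pixel block (DINOv2-like layout).
    # Saliency here is just the feature L2 norm -- an illustrative
    # proxy, not the paper's learned salient keypoint detector.
    H, W, C = feat.shape
    saliency = np.linalg.norm(feat, axis=-1)
    idx = np.argsort(saliency.ravel())[::-1][:k]
    ys, xs = np.unravel_index(idx, (H, W))
    # Map patch-grid indices back to pixel coordinates (patch centers);
    # localization error is bounded by the patch size, which is why
    # coarse features alone localize poorly.
    pix = np.stack([ys * patch_size + patch_size // 2,
                    xs * patch_size + patch_size // 2], axis=1)
    return pix, feat[ys, xs]

# A 224x224 image yields a 16x16 grid of 384-dim features (ViT-S scale).
feat = np.random.randn(16, 16, 384).astype(np.float32)
kps, desc = select_salient_patches(feat)
```

Complementing each coarse descriptor with fine-grained geometric features, as the paper proposes, is what recovers sub-patch localization on top of a selection step like this.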